In this notebook, a template is provided for you to implement, in stages, the functionality required to successfully complete this project. If additional code is required that cannot be included in the notebook, make sure the Python code is successfully imported and included in your submission. Sections that begin with 'Implementation' in the header indicate where you should begin your implementation. Note that some implementation sections are optional and are marked with 'Optional' in the header.
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by double-clicking the cell to enter edit mode.
# Load pickled data
import pickle

# Fill this in based on where you saved the training and testing data
training_file = 'traffic-signs-data/train.p'
testing_file = 'traffic-signs-data/test.p'

with open(training_file, mode='rb') as f:
    train = pickle.load(f)
with open(testing_file, mode='rb') as f:
    test = pickle.load(f)

X_train, y_train = train['features'], train['labels']
X_test, y_test = test['features'], test['labels']
import csv
import pprint

# Map class ids to human-readable sign names; the CSV header row fails
# the int() cast and is skipped via the ValueError handler.
sign_dict = {}
with open('signnames.csv', 'r') as f:
    reader = csv.reader(f)
    for row in reader:
        try:
            sign_dict.update({int(row[0]): row[1]})
        except ValueError:
            print("Could not convert data to an integer.")

pp = pprint.PrettyPrinter()
pp.pprint(sign_dict)
The pickled data is a dictionary with 4 key/value pairs:
- 'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
- 'labels' is a 2D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
- 'sizes' is a list containing tuples, (width, height), representing the original width and height of the image.
- 'coords' is a list containing tuples, (x1, y1, x2, y2), representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES.

Complete the basic data summary below.
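As a quick first look at the dictionary itself, here is a minimal sketch (it assumes the pickle really carries the 'sizes' and 'coords' keys described above; only 'features' and 'labels' are used elsewhere in this notebook):

print(train['features'].shape)   # (num_examples, 32, 32, 3)
print(train['labels'].shape)
if 'sizes' in train:
    print(train['sizes'][0])     # original (width, height) of example 0
if 'coords' in train:
    print(train['coords'][0])    # bounding box (x1, y1, x2, y2) in the original image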
### Replace each question mark with the appropriate value.
import numpy as np

# Number of training examples
n_train = len(X_train)

# Number of testing examples
n_test = len(X_test)

# What's the shape of a traffic sign image?
image_shape = X_train[0].shape

# How many unique classes/labels are there in the dataset?
n_classes = len(np.unique(y_train))

print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
assert(len(X_train) == len(y_train))
assert(len(X_test) == len(y_test))
print()
print("Image Shape: {}".format(X_train[0].shape))
print()
print("Training Set: {} samples".format(len(X_train)))
print("Test Set: {} samples".format(len(X_test)))
Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended; suggestions include plotting traffic sign images, plotting the count of each sign, etc.
The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.
NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections.
### Data exploration visualization goes here.
### Feel free to use as many code cells as needed.
import random
import numpy as np
import matplotlib.pyplot as plt
# Visualizations will be shown in the notebook.
%matplotlib inline
# Show three randomly picked training images with their class labels.
for i in range(0, 3):
    index = random.randint(0, len(X_train) - 1)  # randint is inclusive on both ends
    image = X_train[index].squeeze()
    plt.figure(figsize=(1, 1))
    plt.imshow(image)
    plt.tick_params(
        axis='x',            # changes apply to the x-axis
        labelbottom='off')   # labels along the bottom edge are off
    plt.tick_params(
        axis='y',            # changes apply to the y-axis
        labelleft='off')     # labels along the left edge are off
    print("This is class " + str(y_train[index]) + ": " + sign_dict[y_train[index]])
    plt.show()
u, indices = np.unique(y_train, return_index=True)
for i in range(0, len(u)):
    num = np.sum(y_train == i)
    print("class {} ({}) has {} samples".format(i, sign_dict[i], num))
    image = X_train[indices[i]].squeeze()
    plt.figure(figsize=(1, 1))
    plt.tick_params(axis='x', labelbottom='off')
    plt.tick_params(axis='y', labelleft='off')
    plt.imshow(image)
    plt.show()
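The count of each sign, as suggested above, can also be shown as a bar chart. A minimal sketch building on the `np.unique` call:

# Plot the number of samples per class as a bar chart.
u, counts = np.unique(y_train, return_counts=True)
plt.figure(figsize=(12, 4))
plt.bar(u, counts)
plt.xlabel('class id')
plt.ylabel('number of samples')
plt.title('Samples per class in the training set')
plt.show()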
Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.
There are various aspects to consider when thinking about this problem: the neural network architecture, preprocessing techniques, the number of examples per label, and generating additional (fake) data.
Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper, but it's good practice to try to read papers like these.
NOTE: The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!
Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.
import matplotlib
import scipy.misc
import numpy as np

def plot_image(image, caption, gray=False):
    plt.figure(figsize=(1, 1))
    plt.tick_params(axis='x', labelbottom='off')
    plt.tick_params(axis='y', labelleft='off')
    if gray:
        plt.imshow(image, cmap="gray")
    else:
        plt.imshow(image)
    plt.title(caption)
    plt.show()
def mod_lighting(img, offset_saturation=-1, offset_lightness=-1):
    # HSV: hue H as an angle on the color wheel (roughly 0° for red, 120° for green, 240° for blue),
    # saturation S (0 = neutral gray, 0.5 = weakly saturated, 1 = fully saturated, pure color),
    # value V (0 = no brightness, 1 = full brightness).
    if offset_saturation == -1:
        offset_saturation = np.random.randn() * 0.1
    if offset_lightness == -1:
        offset_lightness = np.random.randn() * 0.3
    img_hsv = matplotlib.colors.rgb_to_hsv(img)
    img_hsv[:, :, 1] = np.clip(img_hsv[:, :, 1] + offset_saturation, 0.0, 1.0)
    img_hsv[:, :, 2] = np.clip(img_hsv[:, :, 2] + offset_lightness, 0.0, 1.0)
    img_rgb = matplotlib.colors.hsv_to_rgb(img_hsv)
    return img_rgb
def mod_blur(img):
    # requires the Python package "Pillow"
    # note: imfilter's 'smooth' kernel is fixed, so there is no random blur amount
    img_blur = scipy.misc.imfilter(img, 'smooth')
    return img_blur

def mod_rotate(img):
    # requires the Python package "Pillow"
    angle = np.random.uniform(-15, 15)
    img_rot = scipy.misc.imrotate(img, angle)
    return img_rot

def mod_grayscale(img):
    img_gray = np.dot(img, [0.299, 0.587, 0.114])
    return img_gray
#def normalize_grayscale(image_data):
#    a = 0.1
#    b = 0.9
#    x_min = np.min(image_data)
#    x_max = np.max(image_data)
#    return a + (image_data - x_min) * (b - a) / (x_max - x_min)

def normalize_rgb(image_data):
    raise NotImplementedError("normalize_rgb is not implemented; the YUV preprocessing below is used instead")
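For reference, a minimal sketch of what such an RGB normalization could look like, analogous to the commented-out grayscale version (hypothetical; the pipeline below uses the YUV conversion instead):

def normalize_rgb_sketch(image_data, a=0.1, b=0.9):
    # Min-max scale all channels jointly into [a, b] (a sketch, not the
    # preprocessing actually used in this notebook).
    x_min = np.min(image_data)
    x_max = np.max(image_data)
    return a + (image_data - x_min) * (b - a) / (x_max - x_min)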
# rgb2yuv is only available in version 0.13dev of scikit-image,
# so the conversion matrices are defined here directly
import skimage
from scipy import linalg

yuv_from_rgb = np.array([[ 0.299     ,  0.587     ,  0.114      ],
                         [-0.14714119, -0.28886916,  0.43601035 ],
                         [ 0.61497538, -0.51496512, -0.10001026 ]])
rgb_from_yuv = linalg.inv(yuv_from_rgb)

def rgb2yuv(rgb):
    return skimage.color.colorconv._convert(yuv_from_rgb, rgb)

def yuv2rgb(yuv):
    return skimage.color.colorconv._convert(rgb_from_yuv, yuv)
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle

with open(training_file, mode='rb') as f:
    train = pickle.load(f)
with open(testing_file, mode='rb') as f:
    test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_test, y_test = test['features'], test['labels']

# Get randomized datasets for training and validation
X_train, X_valid, y_train, y_valid = train_test_split(
    X_train.astype(float) / 255.0,
    y_train,
    test_size=0.05,
    random_state=832289)

# Shuffle training data
X_train, y_train = shuffle(X_train, y_train)
# Generate additional images
N_ADDITIONAL_IMG = 100000
switcher = {
    0: mod_lighting,
    1: mod_rotate,
    2: mod_blur,
}
n_train = len(X_train)
X_train_new = np.empty((n_train + N_ADDITIONAL_IMG, *X_train.shape[1:]))
X_train_new[:n_train] = X_train
y_train_new = np.empty(n_train + N_ADDITIONAL_IMG, dtype=y_train.dtype)
y_train_new[:n_train] = y_train
for i in range(N_ADDITIONAL_IMG):
    index = random.randint(0, n_train - 1)
    # randomly choose between different image modifiers
    func = switcher.get(random.randint(0, 2))
    label_new = y_train[index]
    # note: mod_rotate and mod_blur go through scipy.misc and return uint8
    # images, while X_train was scaled to [0, 1]; rescaling their output
    # may be needed for a consistent value range
    img_new = func(X_train[index])
    X_train_new[n_train + i] = img_new
    y_train_new[n_train + i] = label_new
    if not i % 500:
        print(".", end="", flush=True)
    # plot_image(img_new, "new")  # uncomment to see generated images
X_train = X_train_new
y_train = y_train_new
print()
print("Image Shape: {}".format(X_train[0].shape))
print()
print("Training Set: {} samples (original samples {})".format(len(X_train), n_train))
print("Validation Set: {} samples".format(len(X_valid)))
print("Test Set: {} samples".format(len(X_test)))
def preprocess_img(img):
    img = rgb2yuv(img)                  # convert to YUV colorspace
    img[:, :, 0] = img[:, :, 0] - 0.5   # remove mean
    return img

for index in range(len(X_train)):
    X_train[index] = preprocess_img(X_train[index])
for index in range(len(X_valid)):
    X_valid[index] = preprocess_img(X_valid[index])

X_test = X_test.astype(float) / 255.0
for index in range(len(X_test)):
    X_test[index] = preprocess_img(X_test[index])
print("Done converting to YUV colorspace")
### Define your architecture here.
### Feel free to use as many code cells as needed.
import tensorflow as tf
from tensorflow.contrib.layers import flatten

# Convolution output size:
#   general (padding P, stride S):
#     new_height = (input_height - filter_height + 2*P)/S + 1
#     new_width  = (input_width  - filter_width  + 2*P)/S + 1
#   'SAME' padding:
#     out_height = ceil(float(in_height) / float(S))
#     out_width  = ceil(float(in_width)  / float(S))
#   'VALID' padding:
#     out_height = ceil(float(in_height - filter_height + 1) / float(S))
#     out_width  = ceil(float(in_width  - filter_width  + 1) / float(S))
#
# Max pooling ('VALID'):
#   new_height = (input_height - filter_height)/S + 1
#   new_width  = (input_width  - filter_width)/S + 1
def LeNet(x, dropout):
    # Hyperparameters
    mu = 0
    sigma = 0.1

    # Layer 1: Convolutional. Input = 32x32x3. Output = 28x28x6.
    # new_height = (32 - 5 + 1)/1 = 28
    # new_width  = (32 - 5 + 1)/1 = 28
    conv1_weights = tf.Variable(tf.truncated_normal(shape=(5, 5, 3, 6), mean=mu, stddev=sigma), name='conv1w')  # (height, width, input_depth, output_depth)
    conv1_bias = tf.Variable(tf.zeros(6), name='conv1b')
    strides = [1, 1, 1, 1]  # (batch, height, width, depth)
    padding = 'VALID'
    conv1 = tf.nn.conv2d(x, conv1_weights, strides, padding) + conv1_bias

    # Activation.
    conv1_activation = tf.nn.relu(conv1)

    # Pooling. Input = 28x28x6. Output = 14x14x6.
    ksize = [1, 2, 2, 1]
    strides = [1, 2, 2, 1]  # (batch_size, height, width, depth)
    padding = 'VALID'
    conv1_pooling = tf.nn.max_pool(conv1_activation, ksize, strides, padding)

    # Layer 2: Convolutional. Output = 10x10x16.
    # new_height = (14 - 5 + 2*0)/1 + 1 = 10
    # new_width  = (14 - 5 + 2*0)/1 + 1 = 10
    conv2_weights = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean=mu, stddev=sigma), name='conv2w')  # (height, width, input_depth, output_depth)
    conv2_bias = tf.Variable(tf.zeros(16), name='conv2b')
    strides = [1, 1, 1, 1]  # (batch, height, width, depth)
    padding = 'VALID'
    conv2 = tf.nn.conv2d(conv1_pooling, conv2_weights, strides, padding) + conv2_bias

    # Activation.
    conv2_activation = tf.nn.relu(conv2)

    # Pooling. Input = 10x10x16. Output = 5x5x16.
    # new_height = (10 - 2)/2 + 1 = 5
    # new_width  = (10 - 2)/2 + 1 = 5
    ksize = [1, 2, 2, 1]
    strides = [1, 2, 2, 1]  # (batch_size, height, width, depth)
    padding = 'VALID'
    conv2_pooling = tf.nn.max_pool(conv2_activation, ksize, strides, padding)

    # Flatten. Input = 5x5x16. Output = 400.
    fully_connected1 = flatten(conv2_pooling)

    # Layer 3: Fully Connected. Input = 400. Output = 120.
    fc1_weights = tf.Variable(tf.truncated_normal([400, 120], mean=mu, stddev=sigma), name='fc1w')
    fc1_biases = tf.Variable(tf.zeros(120), name='fc1b')
    fully_connected1 = tf.matmul(fully_connected1, fc1_weights) + fc1_biases

    # Activation.
    fully_connected1_activation = tf.nn.relu(fully_connected1)

    # Dropout
    fully_connected1_activation = tf.nn.dropout(fully_connected1_activation, dropout)

    # Layer 4: Fully Connected. Input = 120. Output = 84.
    fc2_weights = tf.Variable(tf.truncated_normal([120, 84], mean=mu, stddev=sigma), name='fc2w')
    fc2_biases = tf.Variable(tf.zeros(84), name='fc2b')
    fully_connected2 = tf.matmul(fully_connected1_activation, fc2_weights) + fc2_biases

    # Activation.
    fully_connected2_activation = tf.nn.relu(fully_connected2)

    # Layer 5: Fully Connected. Input = 84. Output = 43.
    fc3_weights = tf.Variable(tf.truncated_normal([84, n_classes], mean=mu, stddev=sigma), name='fc3w')
    fc3_biases = tf.Variable(tf.zeros(n_classes), name='fc3b')
    logits = tf.matmul(fully_connected2_activation, fc3_weights) + fc3_biases
    return logits
keep_prob = tf.placeholder(tf.float32)  # dropout (keep probability)
x = tf.placeholder(tf.float32, (None, 32, 32, 3))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, n_classes)

EPOCHS = 14
BATCH_SIZE = 128 * 4
dropout_keep_rate = 0.75
LEARNING_RATE = 0.001

logits = LeNet(x, keep_prob)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=one_hot_y)
softmax = tf.nn.softmax(logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate=LEARNING_RATE)
training_operation = optimizer.minimize(loss_operation)

pred_operation = tf.argmax(logits, 1)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
top_k_operation = tf.nn.top_k(softmax, k=5, sorted=True, name=None)
def evaluate(X_data, y_data):
    num_examples = len(X_data)
    total_accuracy = 0
    sess = tf.get_default_session()
    for offset in range(0, num_examples, BATCH_SIZE):
        batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
        accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 1.})
        total_accuracy += (accuracy * len(batch_x))
    return total_accuracy / num_examples
### Train your model here.
### Feel free to use as many code cells as needed.

# Measurements used for graphing loss and accuracy
TRACK_ACCURACY = False  # training takes very long with TRACK_ACCURACY
log_batch_step = 1      # x-axis increment per logged batch
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []

saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    num_examples = len(X_train)

    print("Training...")
    print()
    for i in range(EPOCHS):
        X_train, y_train = shuffle(X_train, y_train)
        for offset in range(0, num_examples, BATCH_SIZE):
            end = offset + BATCH_SIZE
            batch_x, batch_y = X_train[offset:end], y_train[offset:end]

            # Run optimizer and get loss
            sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: dropout_keep_rate})

            # Calculate training and validation accuracy
            loss_current_batch = sess.run(loss_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 1.})
            if TRACK_ACCURACY:
                # Evaluating the full sets here can run out of GPU memory:
                # "Ran out of memory trying to allocate 1.77GiB. The caller
                # indicates that this is not a failure, but may mean that there
                # could be performance gains if more memory is available."
                training_accuracy = sess.run(accuracy_operation, feed_dict={x: X_train, y: y_train, keep_prob: 1.})
                validation_accuracy = sess.run(accuracy_operation, feed_dict={x: X_valid, y: y_valid, keep_prob: 1.})
            else:
                training_accuracy = 0
                validation_accuracy = 0

            # Log batches
            previous_batch = batches[-1] if batches else 0
            batches.append(log_batch_step + previous_batch)
            loss_batch.append(loss_current_batch)
            train_acc_batch.append(training_accuracy)
            valid_acc_batch.append(validation_accuracy)

        validation_accuracy = evaluate(X_valid, y_valid)
        print("EPOCH {} ...".format(i+1))
        print("Validation Accuracy = {:.3f}".format(validation_accuracy))
        print()

    saver.save(sess, 'lenet_traffic_sign')
    print("Model saved")
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
# Trial 1: Validation accuracy at 0.9330193506741885 - RGB
# Trial 2: Validation accuracy at 0.9105180509735439 - 2x Neurons
# Trial 3: Validation accuracy at 0.1988781257772372 - YUV
# Trial 4: Validation accuracy at 0.4237633793558289 - YUV/20000
# Trial 5: Validation accuracy at 0.9362570046770640 - YUV/normal
# Trial 6: Validation accuracy at 0.957 - YUV/normal + dropout = 0.75
# Trial 7: Validation accuracy at 0.945 - 100k preprocessed images
# Trial 8: Validation accuracy at 0.9439061580112794 - final
Describe how you preprocessed the data. Why did you choose that technique?
Answer:
All images are converted to the YUV colorspace as pointed out in the LeCun paper. The conversion separates the intensity (Y, luma) from the color channels (U, V), so the ConvNet can extract information from the grayscale image provided by the Y channel. In addition, 0.5 is subtracted from the Y channel to remove the mean (rgb2yuv returns Y with 0 < Y < 1).
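A small sanity check for this reasoning, as a sketch (assuming the rgb2yuv/yuv2rgb helpers above behave as intended): the conversion should round-trip, and the Y channel of a [0, 1] RGB image should stay within [0, 1], so subtracting 0.5 roughly centers it around zero.

# Verify the YUV round-trip and the Y value range on a random image.
rgb = np.random.rand(4, 4, 3)           # random RGB image in [0, 1]
yuv = rgb2yuv(rgb)
assert np.allclose(yuv2rgb(yuv), rgb)   # round-trip recovers the input
print("Y range:", yuv[:, :, 0].min(), yuv[:, :, 0].max())  # stays within [0, 1]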
Describe how you set up the training, validation and testing data for your model. Optional: If you generated additional data, how did you generate the data? Why did you generate the data? What are the differences in the new dataset (with generated data) from the original dataset?
Answer:
The training data is split into a validation set and a training set, with 5% of the samples used for validation. Additional training data is generated by randomly selecting an existing image and randomly applying one of three filters:

- mod_lighting: randomly shifts saturation and value in HSV space
- mod_rotate: rotates by a random angle between -15° and 15°
- mod_blur: applies a smoothing filter

In total, N_ADDITIONAL_IMG = 100000 additional samples are created. These additional images are used to train a more robust ConvNet that is less sensitive to the disturbances generated by the filters. More filters could be added; in particular, shifting and zooming should be added.
### Generate data additional data (OPTIONAL!)
### and split the data into training/validation/testing sets here.
### Feel free to use as many code cells as needed.
plot_image(train['features'][3]/255.0, "original")
plot_image(mod_lighting( train['features'][3] ), "lighting")
plot_image(mod_blur( train['features'][3] ), "blur")
plot_image(mod_rotate( train['features'][3] ), "rotate")
plot_image(mod_grayscale( train['features'][3] ), "grayscale", gray=True) # not used
What does your final architecture look like? (Type of model, layers, sizes, connectivity, etc.) For reference on how to build a deep neural network using TensorFlow, see Deep Neural Network in TensorFlow from the classroom.
Answer:
The neural network uses the LeNet-5 architecture almost without changes. I tried to double the number of neurons in the fully connected layers, but the validation accuracy dropped from 93.3% to 91.0%. Other modifications didn't show improvements, so I left the architecture mostly unmodified. Implementing dropout for the first fully connected layer did show some improvement, so dropout with a keep rate of 0.75 was added.
**NEW:** The applied LeNet-5 architecture consists of two convolutional layers and three fully connected layers. In detail, the architecture looks like this:

- Layer 1: 5x5 convolution, 3 -> 6 channels (32x32x3 -> 28x28x6), ReLU, 2x2 max pooling (-> 14x14x6)
- Layer 2: 5x5 convolution, 6 -> 16 channels (-> 10x10x16), ReLU, 2x2 max pooling (-> 5x5x16)
- Flatten: 5x5x16 -> 400
- Layer 3: fully connected, 400 -> 120, ReLU, dropout (keep probability 0.75)
- Layer 4: fully connected, 120 -> 84, ReLU
- Layer 5: fully connected, 84 -> 43 (logits)
How did you train your model? (Type of optimizer, batch size, epochs, hyperparameters, etc.)
Answer:
For optimization, the Adam optimizer is used to minimize the cross entropy, which is calculated from the logits after applying the softmax function. A learning rate of 0.001 shows good optimization progress with good stability. Choosing more than 14 epochs didn't improve the prediction accuracy on the validation set, so 14 seems to be a reasonable number. The batch size was adopted from the previous lab and multiplied by 4 (I trained on the AWS cloud).
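For a rough sense of the training cost implied by these settings, a minimal sketch (len(X_train) is the augmented training set from above):

import math
batches_per_epoch = math.ceil(len(X_train) / BATCH_SIZE)
print("{} batches per epoch, {} optimizer steps in total".format(
    batches_per_epoch, batches_per_epoch * EPOCHS))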
What approach did you take in coming up with a solution to this problem? It may have been a process of trial and error, in which case, outline the steps you took to get to the final solution and why you chose those steps. Perhaps your solution involved an already well known implementation or architecture. In this case, discuss why you think this is suitable for the current problem.
Answer:
I started by implementing the LeNet-5 architecture, since it has been proven to show good results in digit recognition. Hyperparameters were mostly chosen from the lectures and previous labs. I then started to introduce several modifications to the network, especially to how data is fed into it. Playing around with colorspaces proved to have a significant impact on the overall result. The approach described by LeCun for traffic sign recognition also feeds features from lower convolutional layers into the fully connected layer (2nd stage); this was not implemented, but it could be the reason why a much higher accuracy of 99% is reached by the architecture described in the paper.
Take several pictures of traffic signs that you find on the web or around you (at least five), and run them through your classifier on your computer to produce example results. The classifier might not recognize some local signs but it could prove interesting nonetheless.
You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.
Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
plot_image(X_test[1,:,:,0], "1st test image", gray=True)
with tf.Session() as sess:
    # lenet_traffic_sign
    # saver.restore(sess, tf.train.latest_checkpoint('.'))
    saver.restore(sess, './lenet_traffic_sign')
    test_accuracy = evaluate(X_test, y_test)
    print("Test Accuracy = {:.3f}".format(test_accuracy))
Choose five candidate images of traffic signs and provide them in the report. Are there any particular qualities of the image(s) that might make classification difficult? It could be helpful to plot the images in the notebook.
Answer:
Traffic signs were extracted from the following photographs:

NEW: The chosen images might be difficult to classify for the following reasons:

One image was included that is impossible to classify because its class is unknown to the network.
### Run the predictions here.
### Feel free to use as many code cells as needed.
import os
import numpy as np
from scipy import misc
X_test2 = np.empty((0, 32, 32, 3))
X_test2_org = np.empty((0, 32, 32, 3))
directory = 'data-tw'
for filename in sorted(os.listdir(directory)):
    if filename.endswith(".png") and not filename == "overview.png":
        filepath = os.path.join(directory, filename)
        print("Loading " + filepath)
        img = misc.imread(filepath)
        img = misc.imresize(img, (32, 32))
        img = img[:, :, 0:3]
        # scale to [0, 1] like the training data before preprocessing
        img_preprocess = preprocess_img(img.astype(float) / 255.0)
        X_test2 = np.concatenate((X_test2, [img_preprocess]), axis=0)
        X_test2_org = np.concatenate((X_test2_org, [img]), axis=0)
X_test2.shape
def predict(X_data):
    sess = tf.get_default_session()
    if X_data.ndim == 3:
        X_data = [X_data]
    pred = sess.run(pred_operation, feed_dict={x: X_data, keep_prob: 1.})
    return pred

with tf.Session() as sess:
    saver.restore(sess, './lenet_traffic_sign')
    print("Model restored.")
    pred = predict(X_test2)
    print(pred)
    for i in range(len(X_test2)):
        plot_image(X_test2_org[i,:,:,:]/255.0, "[{}] This is '{}' ({})".format(i, sign_dict[pred[i]], pred[i]))
Is your model able to perform equally well on captured pictures when compared to testing on the dataset? The simplest way to do this is to check the accuracy of the predictions. For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate.
NOTE: You could check the accuracy manually by using signnames.csv (same directory). This file has a mapping from the class id (0-42) to the corresponding sign name. So, you could take the class id the model outputs, lookup the name in signnames.csv and see if it matches the sign from the image.
Answer:
The model shows good prediction results: out of 20 photographs, 15 are classified correctly. Two of the road signs are not included in the training set and are therefore unknown to the ConvNet. Excluding these two, 3 of the remaining 18 photographs were falsely classified (No. 4, No. 15 and No. 16), so the model is 15/18 ≈ 83% accurate. The reason for the false classification could be a different zoom level in the case of No. 4 (No. 5 is classified correctly!) and missing training data in the case of No. 16, which is actually not a German road sign (it uses a different font).
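To make this check reproducible rather than manual, the accuracy over the known signs can be computed directly. A sketch (y_true would be a hypothetical array of hand-labeled class ids looked up in signnames.csv, which is not part of this notebook):

def accuracy_on_new_images(pred, y_true):
    # y_true: hypothetical hand-labeled class ids, with -1 marking signs
    # whose class is not among the 43 training classes.
    known = y_true >= 0
    return np.mean(pred[known] == y_true[known])

With 15 of 18 known signs classified correctly, this yields 15/18 ≈ 0.83, matching the 83% stated above.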
Use the model's softmax probabilities to visualize the certainty of its predictions, tf.nn.top_k could prove helpful here. Which predictions is the model certain of? Uncertain? If the model was incorrect in its initial prediction, does the correct prediction appear in the top k? (k should be 5 at most)
tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.
Take this numpy array as an example:
# (5, 6) array
a = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497,
0.12789202],
[ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401,
0.15899337],
[ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 ,
0.23892179],
[ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 ,
0.16505091],
[ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137,
0.09155967]])
Running it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces:
TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202],
[ 0.28086119, 0.27569815, 0.18063401],
[ 0.26076848, 0.23892179, 0.23664738],
[ 0.29198961, 0.26234032, 0.16505091],
[ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5],
[0, 1, 4],
[0, 5, 1],
[1, 3, 5],
[1, 4, 3]], dtype=int32))
Looking just at the first row, we get [ 0.34763842, 0.24879643, 0.12789202]; you can confirm these are the 3 largest probabilities in a. You'll also notice [3, 0, 5] are the corresponding indices.
Answer:
### Visualize the softmax probabilities here.
### Feel free to use as many code cells as needed.
with tf.Session() as sess:
    saver.restore(sess, './lenet_traffic_sign')
    print("Model restored.")
    rank = sess.run(top_k_operation, feed_dict={x: X_test2, keep_prob: 1.})
    print(rank)
def plot_image2(ax, image):
    ax.tick_params(axis='x', labelbottom='off')
    ax.tick_params(axis='y', labelleft='off')
    ax.imshow(image)

for i in range(len(X_test2)):
    rank_img_val = rank.values[i, :]
    rank_img_idx = rank.indices[i, :]
    fig = plt.figure(num=None, figsize=(16, 4), dpi=90, facecolor='w', edgecolor='k')
    ax1 = fig.add_subplot(1, 2, 1)
    plot_image2(ax1, X_test2_org[i,:,:,:]/255.0)
    ax1.set_title("[{}] This is '{}' ({})".format(i, sign_dict[pred[i]], pred[i]))
    ax2 = fig.add_subplot(1, 2, 2)
    ax2.barh(range(5), rank_img_val)
    # ax2.set_xscale('log')
    ax2.set_yticks(range(5))
    ax2.set_yticklabels([sign_dict[idx] for idx in rank_img_idx])
    ax2.set_ylim(-1, 6)
    pos1 = ax2.get_position()
    pos2 = [pos1.x0 + 0.1, pos1.y0, pos1.width / 1.2, pos1.height]
    ax2.set_position(pos2)
    ax2.set_title("Top 5 softmax probabilities for picture {}".format(i))
In most cases the model has high certainty. For the falsely classified photograph No. 15, the correct class shows up in second place. For No. 16 the correct class is in the top 5. For No. 4 the correct class is not in the top 5; this might be due to the fact that the road sign occupies only a small part of the photograph. Better cropping or training the ConvNet with differently scaled images could improve the result. For the unknown sign No. 14, the softmax probabilities are, as expected, not well separated.
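This certainty claim can be quantified with a small sketch using the rank tensor computed above: the gap between the two largest softmax probabilities is large when the model is certain and close to zero when the top candidates are not well separated, as for No. 14.

# Gap between the two largest softmax probabilities per image.
top1_top2_gap = rank.values[:, 0] - rank.values[:, 1]
for i, gap in enumerate(top1_top2_gap):
    print("picture {}: top-1/top-2 gap = {:.3f}".format(i, gap))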
Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.